founda[e85,jmc] Foundations of AI - For New Mexico Partridge conference
AI Reasoning Should be Logic with Extensions
by John McCarthy, Stanford University
Leibniz proposed to replace arguing by calculation.
He wanted a logical system in which consequences would follow
from agreed premisses, but he didn't get very far. In fact
he didn't even invent propositional calculus, although he
did see that truth values could be represented by 0 and 1.
It requires explanation that he didn't get further, because
propositional calculus is mathematically far simpler than
the infinitesimal calculus of which he was the co-inventor along
with Newton. The likely answer is that he didn't see that
logical computation is quite separate from numerical computation,
and that the familiar arithmetic operations are inappropriate for it.
150 years later Boole did invent propositional calculus
and entitled his book {\it The Laws of Thought}, showing that
he was trying to do much more than found a technical branch of
mathematics. He didn't invent predicate calculus, but that had
to wait a mere 35 years more for Frege. Frege also had ambitions
beyond mathematics. However, there wasn't much success until
recently towards the goals of Leibniz, Boole and Frege of applying
logical methods to reasoning formally in a practical way about
the common sense world. Since the beginning of AI research in
the 1950s there have been renewed efforts, and in the 1980s these
efforts have intensified. However, between Frege's time and
recent AI work, there developed a body of conventional opinion
that logic was somehow unsuitable for reasoning about the non-mathematical
world. Yet the alternatives to logic that were explored weren't very
fruitful.
I have mentioned this history to support the view that
it is indeed very difficult to determine the ``laws of thought''.
We aren't there yet, and it may take another 300 years.
When I started work on formalizing common sense knowledge and
reasoning in my 1958 paper ``Programs with Common Sense'', I thought
that mathematical logical languages were the right medium of expression
and appropriately controlled logical deduction was the right basic
reasoning tool. I still like logical languages, but I think
we need improvements in the way we handle incompletely formalized
concepts. I still like logical deduction, but I think it has
to be supplemented by non-monotonic methods of inference
that produce plausible rather than necessary conclusions.
While I and others have produced several versions of one method of
non-monotonic reasoning, namely circumscription, I don't think that
present forms of circumscription are the last word.
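To indicate the flavor (a sketch only; the precise formulation varies among
the versions of circumscription just mentioned), circumscribing a predicate
$P$ in an axiom $A(P)$ amounts to adding a second order minimality condition
$$A(P) \;\wedge\; \forall \Phi\,[\,A(\Phi) \wedge \forall x\,(\Phi(x) \supset P(x))
\;\supset\; \forall x\,(P(x) \supset \Phi(x))\,],$$
which says that no predicate $\Phi$ satisfying $A$ has a smaller extension
than $P$. Conclusions drawn from the minimized theory may have to be
withdrawn when $A$ is strengthened, and that is what makes the reasoning
non-monotonic.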
This paper aims at two goals --- to defend
logic against its detractors and to propose directions for
extending and applying it to common sense knowledge and reasoning.
Besides discussing non-monotonic reasoning, I will also discuss
two ideas that I have recently begun to explore: formalizing the
notion of context and a ``mental situation calculus''.
There is a kind of standard history of the use of logic
in AI which is only a first approximation to the truth and needs
some correction. Namely, the situation calculus became popular
in the late sixties along with the idea that a general purpose
theorem prover would be an adequate reasoning mechanism. Experiments,
mainly at SRI, showed that logic was inadequate, because the programs
were too slow. This led to STRIPS, as a restriction of logic and
also to systems that were much less logical.
In my view the early programs using situation calculus were
inadequate for several reasons. First it was a mistake to suppose
that a general purpose theorem prover would be adequate. Indeed
my (1960) paper already proposes that sentences be selected for
inference as a consequence of meta-reasoning. Second and most
important, a decision to use logic doesn't determine the specific
predicates and functions that will be used, i.e. it doesn't determine
the actual ``language'', to use the logicians' terminology.
The languages chosen were inadequate, and improving the languages,
i.e. the epistemological part of the problem, is the main key to success.
Finally the early 70s emphasis on short term goals by the funding
agencies did great harm to long range research, as it may be doing
today.
"yorick%nmsu.csnet"@csnet-relay
My position paper
follows. Please acknowledge its receipt, since I'm uncertain about
net function. Occurrences of {\it ...} mean that the enclosed
text is to be in italics, i.e. they are TEXisms.
AI Reasoning Should be Logic with Extensions
by John McCarthy, Stanford University
Leibniz, Boole and Frege all considered that the purpose of
mathematical logic was to embody the ``laws of thought'', as Boole entitled
his book. If their efforts had been completely successful, AI would
have been achieved long ago. Nevertheless, mathematical logic does
embody a good part of the laws of thought, and the approach to AI
that I have been following since the late 1950s is to use it to
formalize common sense knowledge and to extend it where necessary.
I still think that this approach is likely to succeed sooner than
the others I know about, although I think there are still major
conceptual difficulties to be overcome.
The difficulties are of several kinds.
1. We aren't good at expressing common sense knowledge as
logical theories. Indeed much common sense knowledge isn't expressed
in natural language either. To repeat an example I have often used,
a person's knowledge of what to expect when a coffee cup spills
isn't expressible by him in natural language. When two people agree
on the correctness of a verbal argument that involves knowledge
of such physical phenomena, part of the argument involves appeal to such
currently unformalized and unverbalized facts. I don't take the
fact that they are unexpressed in natural language as evidence that they
cannot be expressed in either logic or natural language. Both
natural language and logic have, over time, developed the ability
to express kinds of facts that previously weren't expressed.
2. Logical deduction is always monotonic, and much
human reasoning is non-monotonic. I and others have been using
the circumscription formalism to do non-monotonic reasoning
in expressing common sense facts, and we are also working to improve the
formalism.
3. We don't yet have adequate ways of expressing the
facts necessary to control reasoning. It is supposed that some
people believed that general purpose theorem proving strategies
were adequate for programs that reasoned from facts to a plan
for achieving a goal. I'm not sure who believed this, but it wasn't
me. Anyway, the result of inadequate means of specifying control is
combinatorial explosion. The control needs to include domain specific
facts.
4. We need to combine logical reasoning with other
kinds of computation using other data structures. In order to do this
while keeping the logic on top, we need to be able to describe in logic
what other kinds of computation and data structures do, so that the
logical ``boss'' of the system can decide how to use the other forms of
computation.
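To illustrate point 4 (the particular axiom below is only an illustrative
sketch, not a proposal fixed in this paper), the logical level might be told
what an attached sorting routine computes by an axiom such as
$$\forall l\,[\,\mathit{permutation}(\mathit{sortprog}(l),\, l) \;\wedge\;
\mathit{ordered}(\mathit{sortprog}(l))\,],$$
where $\mathit{sortprog}$ names the external program. The reasoner can then
decide when invoking $\mathit{sortprog}$ serves its goal without having to
re-derive the program's behavior step by step.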
Relations with philosophy
Many of the problems of representing and using common sense
knowledge that AI faces have also been treated by philosophers.
It seems to me that success in AI, and very likely also in philosophy,
requires a different approach than philosophers have heretofore used.
There is an alternative to looking at phenomena like knowledge and
consciousness as though they were definite concepts only needing to be
understood and explained. Instead one can build small theories that
cover only certain cases of these phenomena. These theories will be
adequate for intelligent behavior in limited domains and will be
capable of extension. The philosophers' habit of regarding such
phenomena as {\it natural kinds} leads to considering exotic cases,
where it seems that the philosopher is inventing the extension rather
than discovering it. Anyway building theories of limited scope is
what AI has to do in order to get programs working before all the
problems of philosophy have been solved. Formalized non-monotonic
reasoning is essential here, because it allows us to express theories
that cover the simple cases but don't have to be contradicted when
extended to be more general.
Where does AI in logic go from here?
The largest single need is to formalize more domains of
common sense knowledge in epistemologically adequate, ambiguity
tolerant, elaboration tolerant, and heuristically adequate ways.
In the oral presentation, I'll discuss what this means.
Almost all AI systems formalize common sense concepts in overly
specialized ways.
On the heuristic side, the main need is better ways of
expressing declaratively the control of reasoning.
I am presently exploring two other ideas for improving
the use of logic in AI. The first is called the {\it mental
situation calculus}. It applies the ideas of the situation
calculus to formalizing mental situations with mental actions
like deduction, observation and circumscription, and mental
goals like coming to know or to understand.
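To indicate the intended shape (the notation here is illustrative rather
than settled), the ordinary situation calculus writes assertions like
$$\mathit{holds}(\mathit{on}(A,B),\; \mathit{result}(\mathit{move}(A,B),\, s))$$
for the effect of a physical action on a situation $s$. The mental analogue
would assert, for example,
$$\mathit{holds}(\mathit{knows}(p),\; \mathit{result}(\mathit{observe}(p),\, s)),$$
where $s$ now names a mental situation and $\mathit{observe}$ is a mental
action whose result includes knowing $p$.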
The second idea involves formalization of the notion
of context. This formalization involves trade-off axioms
that allow generalizing the context at the cost of elaborating
the sentences or simplifying the sentences at the cost of
specializing the context.
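As a rough indication of what such a trade-off axiom might look like (the
notation is an assumption for illustration, not something fixed in this
paper), write $\mathit{ist}(c, p)$ for ``$p$ holds in context $c$''. Then an
axiom of the form
$$\mathit{ist}(c_1,\, \mathit{on}(\mathit{Book},\mathit{Table})) \;\equiv\;
\mathit{ist}(c_0,\, \mathit{on}(\mathit{Book},\mathit{Table},\mathit{Office},\, t_1))$$
lets one pass from the specialized context $c_1$, in which a particular place
and time are taken for granted, to the more general context $c_0$ at the cost
of making the place and time explicit in the sentence.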